
    Bayesian log-Gaussian Cox process regression: applications to meta-analysis of neuroimaging working memory studies

    Working memory (WM) was one of the first cognitive processes studied with functional magnetic resonance imaging. With over 20 years of WM studies now available, each with a small sample size, there is a need for meta-analysis to identify the brain regions that are consistently activated by WM tasks and to understand the interstudy variation in those activations. However, current methods in the field cannot fully account for the spatial nature of neuroimaging meta-analysis data or the heterogeneity observed among WM studies. In this work, we propose a fully Bayesian random-effects meta-regression model based on log-Gaussian Cox processes, which can be used for meta-analysis of neuroimaging studies. An efficient Markov chain Monte Carlo scheme for posterior simulation is presented, which makes use of recent advances in parallel computing on graphics processing units. Application of the proposed model to a real data set provides valuable insights regarding the function of WM.
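A log-Gaussian Cox process models foci as a Poisson point process whose log-intensity is a draw from a Gaussian process. As a purely illustrative sketch of that generative idea (not the authors' 3-D regression model or their MCMC scheme), the following hypothetical 1-D example draws a latent GP on a grid and simulates foci by thinning; the kernel parameters are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D illustration of a log-Gaussian Cox process:
# intensity lambda(s) = exp(mu + g(s)), where g is a Gaussian-process
# draw (squared-exponential kernel; all parameters are invented).
grid = np.linspace(0.0, 1.0, 200)
mu = 3.0
lengthscale, variance = 0.1, 0.5

# Covariance matrix of the latent GP on the grid (with jitter for stability)
dists = np.abs(grid[:, None] - grid[None, :])
K = variance * np.exp(-0.5 * (dists / lengthscale) ** 2)
g = rng.multivariate_normal(np.zeros(len(grid)), K + 1e-8 * np.eye(len(grid)))
lam = np.exp(mu + g)  # stochastic intensity, positive by construction

# Simulate foci by thinning a homogeneous Poisson process with rate max(lam)
lam_max = lam.max()
n_candidates = rng.poisson(lam_max * 1.0)  # domain length = 1
candidates = rng.uniform(0.0, 1.0, n_candidates)
keep = rng.uniform(0.0, lam_max, n_candidates) <= np.interp(candidates, grid, lam)
foci = candidates[keep]
```

The doubly stochastic construction is what lets the model separate within-study spatial clustering from between-study heterogeneity; the paper's actual posterior simulation operates on 3-D brain volumes with GPU acceleration.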

    ALE Meta-Analysis Workflows Via the Brainmap Database: Progress Towards A Probabilistic Functional Brain Atlas

    With the ever-increasing number of studies in human functional brain mapping, an abundance of data has been generated that is ready to be synthesized and modeled on a large scale. The BrainMap database archives peak coordinates from published neuroimaging studies, along with the corresponding metadata that summarize the experimental design. BrainMap was designed to facilitate quantitative meta-analysis of neuroimaging results reported in the literature and supports the use of the activation likelihood estimation (ALE) method. In this paper, we present a discussion of the potential analyses that are possible using the BrainMap database and coordinate-based ALE meta-analyses, along with some examples of how these tools can be applied to create a probabilistic atlas and ontological system of describing function–structure correspondences.

    Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation

    Given the increasing number of neuroimaging publications, the automated extraction of knowledge on brain–behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among the several methods for coordinate-based neuroimaging meta-analysis, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate for statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive the first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. First, we showed that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery-rate correction should be avoided. Second, researchers should aim to include at least 20 experiments in an ALE meta-analysis to achieve sufficient power for moderate effects. We note, though, that these calculations and recommendations are specific to ALE and may not extrapolate to other approaches to (neuroimaging) meta-analysis.
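The ALE statistic underlying such simulations combines per-experiment "modeled activation" (MA) maps, each obtained by smoothing reported foci with a Gaussian kernel, into a voxelwise union of activation probabilities. A minimal 1-D sketch of that computation (the grid, kernel width, and foci below are illustrative values, not BrainMap-derived parameters):

```python
import numpy as np

# 1-D sketch of the ALE statistic: each experiment's foci are smoothed
# into a "modeled activation" (MA) map, and ALE treats MA values as
# independent activation probabilities across experiments.
grid = np.arange(0, 100.0)
sigma = 5.0  # kernel width; real ALE derives it from the sample size

def ma_map(foci):
    # probability-like map: maximum kernel value per voxel
    kernels = np.exp(-0.5 * ((grid[:, None] - np.asarray(foci)[None, :]) / sigma) ** 2)
    return kernels.max(axis=1)

# Three hypothetical experiments; two report nearby foci around 20 and 54
experiments = [[20.0, 55.0], [22.0, 80.0], [54.0]]
ma = np.stack([ma_map(f) for f in experiments])

# ALE map: union of independent activation probabilities across experiments
ale = 1.0 - np.prod(1.0 - ma, axis=0)
```

Voxels where foci from several experiments converge (around 20–22 and 54–55 here) receive high ALE values; the thresholding question the paper addresses is how to decide which of those values are larger than expected under spatial randomness.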

    Exploring the neural correlates of (altered) moral cognition in psychopaths

    Research into the neurofunctional mechanisms of psychopathy has gathered momentum over recent years. Previous neuroimaging studies have identified general changes in the brain activity of psychopaths. In an exploratory meta‐analysis, we here investigated the neural correlates of impaired moral cognition in psychopaths. Our analyses replicated recently reported general effects in the dorsomedial prefrontal cortex, lateral prefrontal cortex, fronto‐insular cortex, and amygdala. In addition, we found aberrant brain activity in the midbrain and inferior parietal cortex. Our preliminary findings suggest that alterations in both regions may represent more specific functional brain changes related to (altered) moral cognition in psychopaths. Furthermore, future studies including a more comprehensive corpus of neuroimaging studies on moral cognition in psychopaths should re‐examine this notion.

    What Can Computational Models Contribute to Neuroimaging Data Analytics?

    Over the past years, nonlinear dynamical models have significantly contributed to the general understanding of brain activity as well as brain disorders. Appropriately validated and optimized mathematical models can be used to mechanistically explain properties of brain structure and neuronal dynamics observed in neuroimaging data. A thorough exploration of the model parameter space and hypothesis testing with the methods of nonlinear dynamical systems and statistical physics can assist in the classification and prediction of brain states. On the one hand, such a detailed investigation and systematic parameter variation are hardly feasible in experiments and data analysis. On the other hand, the model-based approach can establish a link between empirically discovered phenomena and more abstract concepts of attractors, multistability, bifurcations, synchronization, noise-induced dynamics, etc. Such a mathematical description allows us to compare and differentiate brain structure and dynamics in health and disease, such that model parameters and dynamical regimes may serve as additional biomarkers of brain states and behavioral modes. In this perspective paper, we first provide a very brief overview of recent progress and some open problems in neuroimaging data analytics, with an emphasis on resting-state brain activity. We then focus on a few recent contributions of mathematical modeling to our understanding of brain dynamics and on model-based approaches in medicine. Finally, we discuss the question stated in the title. We conclude that incorporating computational models into neuroimaging data analytics, as well as into translational medicine, could significantly contribute to progress in these fields.

    Prefrontal involvement in imitation learning of hand actions: effects of practice and expertise.

    In this event-related fMRI study, we demonstrate the effects of a single session of practising configural hand actions (guitar chords) on cortical activations during observation, motor preparation, and imitative execution. During the observation of non-practised actions, the mirror neuron system (MNS), consisting of inferior parietal and ventral premotor areas, was more strongly activated than for the practised actions. This finding indicates a strong role of the MNS in the early stages of imitation learning. In addition, the dorsolateral prefrontal cortex (DLPFC) was selectively involved during observation and motor preparation of the non-practised chords. This finding confirms Buccino et al.’s (2004a) model of imitation learning: for actions that are not yet part of the observer’s motor repertoire, DLPFC engages in operations of selection and combination of existing, elementary representations in the MNS. The pattern of prefrontal activations further supports Shallice’s (2004) proposal of a dominant role of the left DLPFC in modulating lower-level systems, and of a dominant role of the right DLPFC in monitoring operations.

    Brain Activation in Primary Motor and Somatosensory Cortices during Motor Imagery Correlates with Motor Imagery Ability in Stroke Patients

    Aims. While studies on healthy subjects have shown a partial overlap between the motor-execution and motor-imagery neural circuits, few have investigated brain activity during motor imagery in stroke patients with hemiparesis. This work is aimed at examining similarities between motor imagery and execution in a group of stroke patients. Materials and Methods. Eleven patients were asked to perform a visuomotor tracking task by either physically or mentally tracking a sine-wave force target using their thumb and index finger during fMRI scanning. The MIQ-RS questionnaire was administered. Results and Conclusion. Whole-brain analyses confirmed shared neural substrates between motor imagery and motor execution in the bilateral premotor cortex, the SMA, and the contralesional inferior parietal lobule. Additional region-of-interest analyses revealed a negative correlation between kinaesthetic imagery ability and percentage BOLD change in areas 4p and 3a: higher imagery ability was associated with a negative, lower percentage BOLD change in primary sensorimotor areas during motor imagery.

    Towards increasing the clinical applicability of machine learning biomarkers in psychiatry.

    Due to a lack of objective biomarkers, psychiatric diagnoses still rely strongly on patient reporting and clinician judgement. The ensuing subjectivity negatively affects the definition and reliability of psychiatric diagnoses1,2. Recent research has suggested that a combination of advanced neuroimaging and machine learning may provide a solution to this predicament by establishing such objective biomarkers for psychiatric conditions, improving diagnostic accuracy, prognosis and the development of novel treatments3. These promises have led to widespread interest in machine learning applications for mental health4, including a recent paper that reports a biological marker for one of the most difficult yet momentous questions in psychiatry: the assessment of suicidal behaviour5. Just et al. compared a group of 17 participants with suicidal ideation with 17 healthy controls, reporting high discrimination accuracy using task-based functional magnetic resonance imaging signatures of life- and death-related concepts3. The authors further reported high discrimination between nine ideators who had attempted suicide and eight ideators who had not. While a laudable effort on a difficult topic, this study unfortunately illustrates some common conceptual and technical issues in the field that limit translation into clinical practice and raise unrealistic hopes when the results are communicated to the general public.

From a conceptual point of view, machine learning studies aimed at clinical applications need to carefully consider any decisions that might hamper the interpretation or generalizability of their results. Restriction to an arbitrary setting may become detrimental for machine learning applications by providing overly optimistic results that are unlikely to generalize. As an example, Just et al.
excluded more than half of the patients and healthy controls initially enrolled in the study from the main analysis because the desired functional magnetic resonance imaging effects were missing (a rank accuracy of at least 0.6 based on all 30 concepts). This exclusion introduces a non-assessable bias into the interpretation of the results, particularly considering that only six of the 30 concepts were selected for the final classification procedure. While Just et al. attempt to address this question by applying the trained classifier to the 21 initially excluded suicidal ideators, they explicitly omit the 24 excluded controls from this analysis, preventing any interpretation of the extent to which the classifier decision depends on this initial choice.

From a technical point of view, machine learning predictions based on neuroimaging data in small samples are intrinsically highly variable, as stable accuracy estimates and high generalizability are only achieved with several hundreds of participants6,7. The study by Just et al. falls into this category of small-sample studies. To estimate the impact of this uncertainty on the results of Just et al., we adapted a simulation approach using the code and data kindly provided by the authors, randomly permuting the labels across the groups 800 times using their default settings and computing the resulting accuracies. These results showed that the 95% confidence interval for classification accuracy obtained with this dataset spans about 20%, leaving large uncertainty with respect to any potential findings.

Special care is also required with respect to any subjective choices in feature and classifier settings or group selection. While ad-hoc selection of a specific setting is subjective, testing different settings and justifying the chosen one post hoc based on outcome leads to overfitting, thus limiting the generalizability of any classification.
Such overfitting may occur when multiple models or parameter choices are tested with respect to their ability to predict the testing data and only those that perform best are reported. To illustrate this issue, we performed an additional analysis with the code and data kindly provided by Just et al. More specifically, in the code and the manuscript we identified the following non-exhaustive list of prespecified settings: (1) removal of occipital cortex data; (2) subdivision of clusters larger than 11 mm; (3) selection of voxels with at least four contributing participants in each group; (4) selection of stable clusters containing at least five voxels; (5) selection of the 1,200 most stable features; and (6) manual copying and replacing of a cluster for one control participant. Importantly, according to the publication and code documentation, all of these parameters were chosen ad hoc, and for none of these settings was a parameter search performed. We systematically evaluated the effect of each of these choices on the accuracy of differentiating suicide ideators from controls in the original dataset provided by Just et al. As shown in Fig. 1, each of the six parameters represents an optimal choice for differentiation accuracy in this dataset, with any (even minor) change often resulting in substantially lower accuracy estimates. Similarly, data leakage may also contribute to optimistic results when information outside the training set is used to build a prediction model. More generally, whenever human interventions guide the development of machine learning models for the prediction of clinical conditions, careful evaluation and reporting of any researcher degrees of freedom is essential to avoid data leakage and overfitting. Subsequent sharing of data processing and analysis pipelines, as well as of the collected data, is a further key step to increase reproducibility and facilitate the replication of potential findings.
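The label-permutation check described above (shuffle the group labels many times, recompute the classification accuracy, and inspect the spread of the resulting null distribution) can be sketched as follows. The random features, the 17-vs-17 sample sizes, and the leave-one-out nearest-centroid classifier are hypothetical stand-ins for illustration, not the pipeline or data of Just et al.:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in data: 17 vs 17 "participants" with random features, mimicking
# a small-sample two-group classification problem (not the real dataset).
n_per_group, n_features = 17, 6
X = rng.normal(size=(2 * n_per_group, n_features))
y = np.array([0] * n_per_group + [1] * n_per_group)

def loo_accuracy(X, y):
    # leave-one-out cross-validation with a nearest-centroid rule
    # (a deliberately simple classifier for illustration)
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

# Permutation distribution: shuffle labels, recompute accuracy, repeat
accs = np.array([loo_accuracy(X, rng.permutation(y)) for _ in range(800)])
lo, hi = np.percentile(accs, [2.5, 97.5])
print(f"null accuracy 95% interval: [{lo:.2f}, {hi:.2f}]")
```

With only 34 participants, the null distribution is wide, so a single observed accuracy carries large uncertainty; this is the small-sample variability the commentary quantifies at roughly 20 percentage points for the original dataset.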